Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples with the same (potentially very complex) statistics as the training samples. One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution. Here, we present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space; particles are coupled by a universal kernel valid for certain wide neural networks and high-dimensional inputs. The generality of our simplified model allows us to study the conditions under which mode collapse occurs. Indeed, experiments which vary the effective kernel of the generator reveal a mode collapse transition, the shape of which can be related to the type of discriminator through the frequency principle. Further, we find that gradient regularizers of intermediate strengths can optimally yield convergence through critical damping of the generator dynamics. Our effective GAN model thus provides an interpretable physical framework for understanding and improving adversarial training.
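A minimal numerical sketch of the particle picture described above, assuming Gaussian kernels for both the particle coupling and an MMD-style discriminator witness; these specific choices are illustrative stand-ins, not the paper's equations:

```python
import numpy as np

# Toy effective model: generator "particles" move under the gradient field of
# a discriminator witness, coupled through a kernel K; the damping factor
# gamma plays the role of the gradient regularizer discussed above.

rng = np.random.default_rng(0)
targets = np.concatenate([rng.normal(-2, 0.3, 100),
                          rng.normal(2, 0.3, 100)])[:, None]  # two modes
particles = rng.normal(0.0, 1.0, size=(50, 1))
velocity = np.zeros_like(particles)

def k_grad(y, x, sigma=1.0):
    # Mean over x of the gradient (w.r.t. y) of a Gaussian kernel k(y, x).
    diff = y[:, None, :] - x[None, :, :]
    w = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    return (-diff / sigma ** 2 * w[..., None]).mean(1)

eta, gamma = 0.5, 0.3  # step size; damping (regularizer) strength
for _ in range(500):
    K = np.exp(-(particles - particles.T) ** 2 / 2.0)
    K /= K.sum(1, keepdims=True)                      # kernel coupling
    grad_witness = k_grad(particles, particles) - k_grad(particles, targets)
    velocity = (1 - gamma) * velocity - eta * K @ grad_witness
    particles = particles + velocity
```

Varying `gamma` in a sketch like this gives a hands-on feel for the under-, over-, and critically damped regimes the abstract refers to.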
With the prospect of automating many chemical tasks with high fidelity, chemical language processing models are rapidly emerging. Here, we present a cloud-based real-time platform that allows users to virtually screen molecules of interest. To this end, we leverage molecular embeddings inferred from a recently proposed large chemical language model, named MoLFormer. The platform currently supports three tasks: nearest-neighbor retrieval, chemical space visualization, and property prediction. Based on the capabilities of the platform and the results obtained, we believe that such a platform can play a pivotal role in automating chemistry and chemical engineering research, as well as assisting in drug discovery and materials design tasks. A demo of our platform is available at \url{www.ibm.biz/molecular_demo}.
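A sketch of how the nearest-neighbor retrieval task could be served over precomputed embeddings; the random vectors below are stand-ins for actual MoLFormer embeddings, and the index choice is an assumption:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical setup: in the platform these embeddings would come from
# MoLFormer; here random unit vectors stand in for them.
smiles = ["CCO", "CCN", "c1ccccc1", "CC(=O)O"]
emb = np.random.default_rng(0).normal(size=(len(smiles), 768))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(emb)
dist, idx = index.kneighbors(emb[:1])  # neighbors of the first molecule
print([smiles[i] for i in idx[0]], dist[0])
```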
Learning high-dimensional distributions is often done with explicit likelihood modeling or implicit modeling via minimizing integral probability metrics (IPMs). In this paper, we expand this learning paradigm to stochastic orders, namely, the convex or Choquet order between probability measures. Towards this end, exploiting the relation between convex orders and optimal transport, we introduce the Choquet-Toland distance between probability measures, which can be used as a drop-in replacement for IPMs. We also introduce the Variational Dominance Criterion (VDC) to learn probability measures with dominance constraints that encode the desired stochastic order between the learned measure and a known baseline. We analyze both quantities, show that they suffer from the curse of dimensionality, and propose surrogates via input convex maxout networks (ICMNs), which enjoy parametric rates. We provide a min-max framework for learning with stochastic orders and validate it experimentally on synthetic and high-dimensional image generation, with promising results. Finally, our ICMN class of convex functions and its derived Rademacher complexity are of independent interest beyond their application in convex orders.
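A minimal sketch of an input convex maxout network, assuming one standard way to enforce convexity (max over affine pieces, with nonnegative weights on the previous hidden state); the paper's exact parametrization may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICMN(nn.Module):
    """Sketch of an input-convex maxout network: each layer takes a max over
    affine pieces, and weights acting on the previous (convex) hidden state
    are clamped nonnegative, so the output is convex in the input x."""
    def __init__(self, dim, hidden=64, pieces=4, layers=2):
        super().__init__()
        self.hidden, self.pieces = hidden, pieces
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden * pieces) for _ in range(layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden * pieces, bias=False)
                                 for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        # First layer: max over affine pieces of x (convex in x).
        z = self.Wx[0](x).view(-1, self.hidden, self.pieces).amax(dim=-1)
        for wx, wz in zip(self.Wx[1:], self.Wz):
            # Nonnegative weights on the convex hidden state preserve convexity.
            pre = wx(x) + F.linear(z, wz.weight.clamp(min=0))
            z = pre.view(-1, self.hidden, self.pieces).amax(dim=-1)
        return F.linear(z, self.out.weight.clamp(min=0)).squeeze(-1)

f = ICMN(dim=2)
print(f(torch.randn(5, 2)))  # five convex-function evaluations
```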
Divergence measures between probability distributions are at the core of statistical inference and machine learning. In many applications, the distributions of interest are supported on different spaces, requiring a meaningful correspondence between data points. Motivated by explicitly encoding consistent bidirectional maps into the divergence measure, this work proposes a novel unbalanced Monge optimal transport formulation for matching, up to isometries, distributions on different spaces. Our formulation arises as a principled relaxation of the Gromov-Hausdorff distance between metric spaces, and employs two cycle-consistent maps that push forward each distribution onto the other. We study structural properties of the proposed divergence and, in particular, show that it captures the popular cycle-consistent generative adversarial network (GAN) framework as a special case, thereby providing a theoretical explanation for it. Motivated by computational efficiency, we then kernelize the divergence and restrict the maps to parametric function classes. The resulting kernelized version is coined the generalized maximum mean discrepancy (GMMD). Convergence rates for the empirical estimation of GMMD are studied, and experiments supporting our theory are provided.
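A schematic of the kernelized formulation with two cycle-consistent parametric maps; the architectures, kernel, and loss weights below are illustrative assumptions, not the paper's:

```python
import torch
import torch.nn as nn

# T pushes samples of mu (on X) toward nu (on Y), S pushes nu back toward mu,
# and a cycle term penalizes S(T(x)) != x; kernel discrepancies play the role
# of the GMMD terms.

def mmd(a, b, sigma=1.0):
    def k(u, v):
        return torch.exp(-torch.cdist(u, v) ** 2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

dx, dy = 3, 2
T = nn.Sequential(nn.Linear(dx, 64), nn.ReLU(), nn.Linear(64, dy))
S = nn.Sequential(nn.Linear(dy, 64), nn.ReLU(), nn.Linear(64, dx))
opt = torch.optim.Adam([*T.parameters(), *S.parameters()], lr=1e-3)

x = torch.randn(256, dx)          # samples from mu
y = torch.randn(256, dy) + 1.0    # samples from nu
for _ in range(200):
    loss = mmd(T(x), y) + mmd(S(y), x) + ((S(T(x)) - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```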
We present a "physics-enhanced deep surrogate" (PEDS) approach towards developing fast surrogate models for complex physical systems described by partial differential equations (PDEs) and similar models: we show how to combine a low-fidelity "coarse" solver with a neural network that generates "coarsified" inputs, trained end-to-end to globally match the output of an expensive high-fidelity numerical solver. In this way, by incorporating limited physical knowledge in the form of the low-fidelity model, we find that at least $\sim 10\times$ less data is needed than for a "black-box" neural network at the same accuracy. Asymptotically, PEDS appears to learn at least as fast as a black-box surrogate, with further gains when combined with active learning. We demonstrate the feasibility and benefits of the proposed approach through an example problem in electromagnetic scattering that arises in the design of optical metamaterials.
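A toy end-to-end PEDS-style pipeline, assuming a stand-in differentiable "coarse solver" (a small linear solve) and synthetic high-fidelity targets; the point is only the composition NN → coarse solver → loss:

```python
import torch
import torch.nn as nn

N = 16  # coarse grid size

def coarse_solver(source):
    # Toy low-fidelity model: solve (I + L) u = source on a 1D grid,
    # L = discrete Laplacian, and return a scalar observable per sample.
    L = (2 * torch.eye(N) - torch.diag(torch.ones(N - 1), 1)
         - torch.diag(torch.ones(N - 1), -1))
    A = torch.eye(N) + L
    u = torch.linalg.solve(A, source.T).T
    return u.mean(dim=1)

generator = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, N))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

p = torch.randn(128, 4)                 # design parameters
y_hifi = torch.sin(p.sum(dim=1))        # placeholder for high-fidelity data
for _ in range(500):
    pred = coarse_solver(generator(p))  # end-to-end: NN -> coarse solver
    loss = ((pred - y_hifi) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```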
A famous line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width $n$ that a ReLU two-layer neural network needs to approximate a function $f$ over the ball $\mathcal{B}_R(\mathbb{R}^d)$ up to error $\epsilon$, when the Fourier-based quantity $C_f = \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} \|\xi\|^2 |\hat{f}(\xi)| \, d\xi$ is finite. Recently, Ongie et al. (2019) used the Radon transform as a tool to analyze infinite-width ReLU two-layer networks. In particular, they introduced the concept of Radon-based $\mathcal{R}$-norms and showed that a function defined on $\mathbb{R}^d$ can be represented as an infinite-width two-layer neural network if and only if its $\mathcal{R}$-norm is finite. In this work, we extend the framework of Ongie et al. (2019) and define similar Radon-based semi-norms ($\mathcal{R},\mathcal{U}$-norms) such that a function admits an infinite-width neural network representation on a bounded open set $\mathcal{U} \subseteq \mathbb{R}^d$ when its $\mathcal{R},\mathcal{U}$-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993) and Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity.
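For orientation, the Barron-type results referenced above have the following schematic form (constants and norm choices vary across the cited works; the bounds in this paper replace $C_f$ with the $\mathcal{R},\mathcal{U}$-norm):

```latex
\[
  C_f \;=\; \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} \|\xi\|^2\, |\hat{f}(\xi)|\, d\xi
  \;<\; \infty
  \quad\Longrightarrow\quad
  \inf_{\substack{f_n:\ \text{two-layer ReLU,}\\ \text{width } n}}
  \big\| f - f_n \big\|_{L^2(\mathcal{B}_R(\mathbb{R}^d))}
  \;\lesssim\; \frac{R\, C_f}{\sqrt{n}} .
\]
```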
Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make supervised learning challenging. Recently, unsupervised transformer-based language models pretrained on a large unlabelled corpus have produced state-of-the-art results in many downstream natural language processing tasks. Inspired by this development, we present molecular embeddings obtained by training an efficient transformer encoder model, MoLFormer, which uses rotary positional embeddings. This model employs a linear attention mechanism, coupled with highly distributed training, on SMILES sequences of 1.1 billion unlabelled molecules from the PubChem and ZINC datasets. We show that the learned molecular representation outperforms existing baselines, including supervised and self-supervised graph neural networks and language models, on several downstream tasks from ten benchmark datasets, while performing competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule. These results provide encouraging evidence that large-scale molecular language models can capture sufficient chemical and structural information to predict various distinct molecular properties, including quantum-chemical properties.
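A minimal sketch of rotary positional embeddings of the kind the abstract mentions, applied to a tensor of per-token features; the base frequency and coordinate pairing below are common conventions, not necessarily MoLFormer's:

```python
import torch

def rotary(x, base=10000.0):
    """Rotate pairs of feature coordinates by position-dependent angles so
    that dot products between rotated queries and keys depend only on the
    relative position of the tokens."""
    b, t, d = x.shape
    half = d // 2
    freq = base ** (-torch.arange(half, dtype=x.dtype) / half)      # (half,)
    ang = torch.arange(t, dtype=x.dtype)[:, None] * freq[None, :]   # (t, half)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(2, 16, 64)   # (batch, tokens, head_dim)
q_rot = rotary(q)            # apply to queries and keys before attention
```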
Several works in implicit and explicit generative modeling have empirically observed that feature-learning discriminators outperform fixed-kernel discriminators in terms of the sample quality of the models. We provide separation results between probability metrics whose discriminators use the function classes $\mathcal{F}_2$ and $\mathcal{F}_1$, respectively. In particular, we construct pairs of distributions over hyperspheres in high dimension that cannot be discriminated by fixed-kernel ($\mathcal{F}_2$) integral probability metrics (IPMs) and Stein discrepancies (SDs), but can be discriminated by their feature-learning ($\mathcal{F}_1$) counterparts. To further study the separation, we provide links between the $\mathcal{F}_1$ and $\mathcal{F}_2$ IPMs. Our work suggests that fixed-kernel discriminators perform worse than their feature-learning counterparts because their corresponding metrics are weaker.
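For intuition, a small experiment with the simplest fixed-kernel ($\mathcal{F}_2$) IPM, the Gaussian MMD, on hypersphere data; the paper's indistinguishable pairs are a sharper construction than this illustration:

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    # Biased estimator of the squared Gaussian-kernel MMD.
    def k(a, b):
        d2 = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
def sphere(n, d, shift=0.0):
    z = rng.normal(size=(n, d))
    z[:, 0] += shift                 # shift != 0 breaks uniformity on the sphere
    return z / np.linalg.norm(z, axis=1, keepdims=True)

x, y = sphere(500, 256), sphere(500, 256, shift=1.0)
print(mmd2(x, y))  # tiny in high dimension: the fixed-kernel metric is weak
```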
Understanding the generalization of deep neural networks is one of the most important tasks in deep learning. Although much progress has been made, theoretical error bounds still often differ from empirical observations. In this work, we develop margin-based generalization bounds, where the margins are normalized by the optimal transport cost between independent random subsets sampled from the training distribution. In particular, the optimal transport cost can be interpreted as a generalization of variance that captures the structural properties of the learned feature space. Our bounds robustly predict the generalization error, given training data and network parameters, on large-scale datasets. Theoretically, we demonstrate that the concentration and separation of features play crucial roles in generalization, supporting empirical results in the literature. The code is available at \url{https://github.com/chingyaoc/kv-margin}.
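A sketch of the normalizing quantity, an optimal transport cost between features of two independent random subsets, computed here with the POT package; the feature vectors are random stand-ins for a trained network's features:

```python
import numpy as np
import ot  # the POT (Python Optimal Transport) package

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 128))     # stand-in for learned features
i = rng.permutation(1000)[:200]
j = rng.permutation(1000)[:200]
a, b = feats[i], feats[j]                # two independent random subsets

M = ot.dist(a, b, metric="sqeuclidean")  # pairwise cost matrix
w = np.full(200, 1 / 200)                # uniform weights on each subset
transport_cost = ot.emd2(w, w, M)        # exact OT cost, a variance-like normalizer
print(transport_cost)
```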
Gradient flows are a powerful tool for optimizing functionals over general metric spaces, including the space of probability measures endowed with the Wasserstein metric. A typical approach to solving such optimization problems relies on their connection to the dynamic formulation of optimal transport and the celebrated Jordan-Kinderlehrer-Otto (JKO) scheme. However, this formulation involves optimization over convex functions, which is challenging, especially in high dimensions. In this work, we propose an approach that relies on the recently introduced input-convex neural networks (ICNNs) to parametrize the space of convex functions in order to approximate the JKO scheme, as well as to design functionals over measures that enjoy convergence guarantees. We derive a computationally efficient implementation of this JKO-ICNN framework and demonstrate its feasibility and validity in approximating solutions of low-dimensional partial differential equations with known solutions. We also demonstrate its viability in high-dimensional applications through an experiment in controlled generation for molecular discovery.
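A minimal input-convex neural network of the kind used to parametrize convex potentials in JKO-ICNN; layer sizes, activations, and the clamping used to keep weights nonnegative are illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex network (in the style of Amos et al., 2017): softplus is
    convex and nondecreasing, and weights on the convex hidden state are
    clamped nonnegative, so the output is convex in the input x."""
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for wx, wz in zip(self.Wx[1:], self.Wz):
            z = F.softplus(wx(x) + F.linear(z, wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0)).squeeze(-1)

phi = ICNN(dim=2)
x = torch.randn(8, 2, requires_grad=True)
(grad,) = torch.autograd.grad(phi(x).sum(), x)  # ∇phi transports samples in a JKO step
```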